PACL: Piecewise Arc Cotangent Decay Learning Rate for Deep Neural Network Training

Authors
Abstract
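The abstract itself is not included in this listing. Purely as an illustration of what a piecewise arc-cotangent decay schedule might look like, here is a minimal sketch; the exact formula, phase boundaries, and scale factors below are assumptions for illustration, not taken from the paper:

```python
import math

def arccot(x):
    # arccot(x) = pi/2 - arctan(x); decreases from pi/2 at x = 0 toward 0
    return math.pi / 2 - math.atan(x)

def pacl_lr(step, base_lr=0.1, total_steps=1000,
            boundaries=(0.5, 0.8), scales=(1.0, 0.5, 0.1)):
    """Hypothetical piecewise arc-cotangent decay.

    Training is split into phases at the given fractional boundaries; within
    each phase the rate follows a normalized arccot curve scaled by that
    phase's factor (each phase restarts lower, warm-restart style).
    All names and constants here are illustrative assumptions.
    """
    t = min(step / total_steps, 1.0)                # fraction of training done
    edges = (0.0,) + tuple(boundaries) + (1.0,)
    i = max(j for j in range(len(scales)) if t >= edges[j])   # current phase
    local = (t - edges[i]) / (edges[i + 1] - edges[i])        # progress in phase
    # arccot(0) = pi/2, so the ratio starts at 1.0 at each phase boundary
    return base_lr * scales[i] * arccot(4.0 * local) / arccot(0.0)
```

Within each phase the rate decays smoothly along the flat-then-steep arccot curve, which is the qualitative behavior the title suggests; whether the paper uses restarts or a single monotone curve cannot be determined from this listing.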


Similar articles

Two Novel Learning Algorithms for CMAC Neural Network Based on Changeable Learning Rate

The Cerebellar Model Articulation Controller (CMAC) neural network is a computational model of the cerebellum that acts as a lookup table. The advantages of CMAC are fast learning convergence and the capability of mapping nonlinear functions, owing to its local generalization of weight updating, single structure, and easy processing. In the training phase, the disadvantage of some CMAC models is an unstable phenomenon...


Reinforced backpropagation for deep neural network learning

Standard error backpropagation is used in almost all modern deep network training. However, it typically suffers from a proliferation of saddle points in high-dimensional parameter space. It is therefore highly desirable to design an efficient algorithm that escapes these saddle points and reaches a parameter region with better generalization capabilities, especially based on rough insights a...


Deep neural network training emphasizing central frames

It is common practice to concatenate several consecutive frames of acoustic features as the input of a Deep Neural Network (DNN) for speech recognition. A DNN is trained to map the concatenated frames as a whole to the HMM state corresponding to the center frame, while the side frames close to both ends of the concatenated frames and the remaining central frames are treated as equally important. Tho...


Training of Perceptron Neural Network Using Piecewise Linear Activation Function

A new Perceptron training algorithm is presented, which employs the piecewise linear activation function and the sum of squared differences error function over the entire training set. The most commonly used activation functions are continuously differentiable, such as the logistic sigmoid, the hyperbolic tangent, and the arctangent. The differentiable activation functions allow gradient...
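The described setup, a piecewise linear activation trained by batch gradient descent on the sum of squared differences over the entire training set, can be sketched as follows; the data, learning rate, and epoch count are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def piecewise_linear(x, lo=-1.0, hi=1.0):
    # "Hard" piecewise linear activation: identity on [lo, hi], clipped outside.
    return np.clip(x, lo, hi)

def pl_derivative(x, lo=-1.0, hi=1.0):
    # Derivative of the activation: 1 inside the linear region, 0 outside.
    return ((x > lo) & (x < hi)).astype(float)

def train_perceptron(X, y, lr=0.1, epochs=200, seed=0):
    """Batch gradient descent on E = 0.5 * sum((f(X @ w + b) - y)**2),
    where f is the piecewise linear activation. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])  # small init keeps units in the linear region
    b = 0.0
    for _ in range(epochs):
        z = X @ w + b
        err = piecewise_linear(z) - y       # dE/d(output)
        grad_z = err * pl_derivative(z)     # chain rule through the activation
        w -= lr * (X.T @ grad_z)            # full-batch gradient over the training set
        b -= lr * grad_z.sum()
    return w, b
```

On a linearly separable toy problem such as AND with ±1 targets, the trained unit recovers the correct output signs, since zero error is attainable exactly at the clipping boundaries.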


Learning Neural Network with Learning Rate Adaptation

In this chapter, the analog VLSI implementation of a Multi-Layer Perceptron (MLP) network with on-chip learning capability is presented. An MLP architecture is chosen since it can be applied to successfully solve real-world tasks; see, among others, [33, 6, 9, 4]. Many examples of analog implementations of neural networks with on-chip learning capability have been presented in the literature, for exampl...



Journal

Journal title: IEEE Access

Year: 2020

ISSN: 2169-3536

DOI: 10.1109/access.2020.3002884